Results 1 - 20 of 37
1.
Clin Park Relat Disord; 10: 100251, 2024.
Article in English | MEDLINE | ID: mdl-38645305

ABSTRACT

Introduction: Given the unique natural history of GBA-related Parkinson's disease (GBA-PD) and the potential for novel treatments in this population, genetic testing prioritization for the identification of GBA-PD patients is crucial for prognostication, individualized treatment, and stratification for clinical trials. Assessing the predictive value of certain clinical traits for GBA-variant carrier status will help target genetic testing in clinical settings where cost and access limit its availability. Methods: In-depth clinical characterization through standardized rating scales for motor and non-motor symptoms and self-reported binomial information of a cohort of subjects with PD (n = 100) from our center and from the larger cohort of the Parkinson's Progression Markers Initiative (PPMI) was utilized to evaluate the predictive value of clinical traits for GBA-variant carrier status. The model was cross-validated across the two cohorts. Results: Leveraging non-motor symptoms of PD, we established successful discrimination of GBA variants in the PPMI cohort and the study cohort (AUC 0.897 and 0.738, respectively). The PPMI cohort model generalized successfully to the study cohort data using both MDS-UPDRS scores and binomial data (AUC 0.740 and 0.734, respectively), while the study cohort model did not. Conclusions: We assessed the predictive value of non-motor symptoms of PD for identifying GBA carrier status in the general PD population. These data can be used to determine a simple, clinically oriented model using either the MDS-UPDRS or subjective symptom reporting from patients. Our results can inform patient counseling about expected carrier risk and test prioritization for the expected identification of GBA variants.
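The AUC figures quoted above summarize how well a continuous risk score separates carriers from non-carriers, and the metric can be reproduced from scores alone. A minimal sketch using the rank-sum (Mann-Whitney) formulation of ROC AUC, with hypothetical symptom-derived scores rather than the study's data:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: 0/1 per subject (1 = GBA variant carrier); scores: model outputs.
    AUC = probability that a random carrier outscores a random non-carrier.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need at least one carrier and one non-carrier")
    # Count carrier/non-carrier pairs where the carrier ranks higher (ties = 0.5).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical non-motor symptom scores: 4 carriers, then 4 non-carriers.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
auc = roc_auc(labels, scores)  # one carrier/non-carrier pair is misordered
```

An AUC of 0.897, as reported for the PPMI cohort, would mean a randomly chosen carrier outscores a randomly chosen non-carrier roughly 90% of the time.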

2.
bioRxiv; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38559163

ABSTRACT

Objective: This study investigates speech decoding from neural signals captured by intracranial electrodes. Most prior work has been limited to electrodes on a 2D grid (i.e., an electrocorticographic, or ECoG, array) and data from a single patient. We aim to design a deep-learning model architecture that can accommodate both surface (ECoG) and depth (stereotactic EEG, or sEEG) electrodes. The architecture should allow training on data from multiple participants with large variability in electrode placements, and the trained model should perform well on participants unseen during training. Approach: We propose a novel transformer-based model architecture named SwinTW that can work with arbitrarily positioned electrodes by leveraging their 3D locations on the cortex rather than their positions on a 2D grid. We train both subject-specific models using data from a single participant and multi-patient models exploiting data from multiple participants. Main Results: The subject-specific models using only low-density 8x8 ECoG data achieved a high decoding Pearson correlation coefficient with the ground-truth spectrogram (PCC=0.817) over N=43 participants, outperforming our prior convolutional ResNet model and the 3D Swin transformer model. Incorporating the additional strip, depth, and grid electrodes available in each participant (N=39) led to further improvement (PCC=0.838). For participants with only sEEG electrodes (N=9), subject-specific models still achieved comparable performance, with an average PCC=0.798. The multi-patient models achieved high performance on unseen participants, with an average PCC=0.765 in leave-one-out cross-validation. Significance: The proposed SwinTW decoder enables future speech neuroprostheses to utilize any electrode placement that is clinically optimal or feasible for a particular participant, including using only depth electrodes, which are more routinely implanted in chronic neurosurgical procedures. Importantly, the generalizability of the multi-patient models suggests the exciting possibility of developing speech neuroprostheses for people with speech disability without relying on their own neural data for training, which is not always feasible.
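The PCC values above are Pearson correlations between decoded and reference spectrograms. A minimal sketch of that evaluation metric on synthetic arrays (the shapes and noise level are illustrative, not the paper's):

```python
import numpy as np

def spectrogram_pcc(pred, truth):
    """Pearson correlation coefficient between two spectrograms, flattened."""
    p = np.asarray(pred, float).ravel()
    t = np.asarray(truth, float).ravel()
    p = p - p.mean()
    t = t - t.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

rng = np.random.default_rng(0)
truth = rng.standard_normal((128, 100))                # freq bins x time frames
pred = truth + 0.5 * rng.standard_normal(truth.shape)  # noisy "decoded" output
pcc = spectrogram_pcc(pred, truth)                     # below 1.0, above 0.8
```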

3.
Brain Commun; 6(2): fcae053, 2024.
Article in English | MEDLINE | ID: mdl-38505231

ABSTRACT

Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. However, for neurosurgical purposes, structural function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identification of eloquent cortical regions to preserve in neurosurgical patients, there is a lack of specificity about the actual underlying cognitive processes being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g. planning versus motor execution). In this retrospective observational study, we analysed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex, with mid-range latencies to speech arrest and a distributional peak at 0.47 s. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 s), superior temporal gyrus (0.51 s) and middle temporal gyrus (0.54 s), followed by relatively long latencies in sensorimotor cortex (0.72 s) and especially long latencies in inferior frontal gyrus (0.95 s). Non-parametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus. Sensorimotor cortex is primarily responsible for motor-based arrest. Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with outgoing motor execution. In contrast, latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) for speech planning above and beyond motor execution.
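Latency to speech arrest above is the interval from stimulation onset to arrest onset, adjusted for how fast the person talks. The abstract does not spell out how speech rate is controlled for, so the normalization below (latency expressed in units of the speaker's median inter-syllable interval) is an illustrative assumption:

```python
def speech_arrest_latency(stim_onset, arrest_onset, syllable_onsets):
    """Return (raw latency, rate-normalized latency) for one stimulation trial.

    syllable_onsets: times of syllables preceding stimulation, used to
    estimate the speaker's rate. The normalization scheme is hypothetical.
    """
    raw = arrest_onset - stim_onset
    intervals = sorted(b - a for a, b in zip(syllable_onsets, syllable_onsets[1:]))
    median_isi = intervals[len(intervals) // 2]  # crude median, fine for a sketch
    return raw, raw / median_isi

# A speaker producing 4 syllables/s, arrested 0.47 s after stimulation onset.
raw, normalized = speech_arrest_latency(10.00, 10.47, [9.0, 9.25, 9.5, 9.75, 10.0])
```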

4.
Nat Commun; 15(1): 2768, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38553456

ABSTRACT

Contextual embeddings, derived from deep language models (DLMs), provide a continuous vectorial representation of language. This embedding space differs fundamentally from the symbolic representations posited by traditional psycholinguistics. We hypothesize that language areas in the human brain, similar to DLMs, rely on a continuous embedding space to represent language. To test this hypothesis, we densely record the neural activity patterns in the inferior frontal gyrus (IFG) of three participants using dense intracranial arrays while they listened to a 30-minute podcast. From these fine-grained spatiotemporal neural recordings, we derive a continuous vectorial representation for each word (i.e., a brain embedding) in each patient. Using stringent zero-shot mapping, we demonstrate that brain embeddings in the IFG and the DLM contextual embedding space have common geometric patterns. These common geometric patterns allow us to predict the brain embedding in the IFG of a given left-out word based solely on its geometrical relationship to other, non-overlapping words in the podcast. Furthermore, we show that contextual embeddings capture the geometry of IFG embeddings better than static word embeddings. The continuous brain embedding space exposes a vector-based neural code for natural language processing in the human brain.


Subject(s)
Brain, Language, Humans, Prefrontal Cortex, Natural Language Processing
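The zero-shot test in the record above predicts the brain embedding of a held-out word purely from its geometric relation to the other words. A toy sketch of that logic with a ridge map from contextual to brain embeddings (dimensions, noise level, and ridge penalty are assumptions for illustration):

```python
import numpy as np

def zero_shot_predict(contextual, brain, held_out, lam=1.0):
    """Fit ridge regression from contextual to brain embeddings on all words
    except `held_out`, then predict the held-out word's brain embedding."""
    idx = [i for i in range(len(contextual)) if i != held_out]
    X, Y = contextual[idx], brain[idx]
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return contextual[held_out] @ W

rng = np.random.default_rng(1)
true_map = rng.standard_normal((50, 20))   # hidden shared geometry
ctx = rng.standard_normal((300, 50))       # 300 words x 50-d contextual embeddings
brain = ctx @ true_map + 0.1 * rng.standard_normal((300, 20))  # noisy "brain" space
pred = zero_shot_predict(ctx, brain, held_out=0)
cos = float(pred @ brain[0] / (np.linalg.norm(pred) * np.linalg.norm(brain[0])))
```

When the two spaces genuinely share geometry, the predicted embedding for the left-out word points in nearly the same direction as the measured one.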
5.
bioRxiv; 2024 Feb 07.
Article in English | MEDLINE | ID: mdl-38370843

ABSTRACT

Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
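A per-electrode suppression measure makes the reported suppression-sensitivity correlation concrete. The abstract does not give the exact index used, so this normalized listen-versus-speak contrast is one common formulation, shown on made-up response values:

```python
import numpy as np

def suppression_index(resp_listen, resp_speak):
    """Auditory suppression per electrode: response drop while speaking,
    relative to passive listening, bounded in [-1, 1]."""
    listen = np.asarray(resp_listen, float)
    speak = np.asarray(resp_speak, float)
    return (listen - speak) / (listen + speak)

# Hypothetical high-gamma responses (arbitrary units) at four STG sites.
si = suppression_index([1.0, 0.8, 0.6, 0.5], [0.2, 0.4, 0.5, 0.5])
# si[0] is strongly suppressed; si[3] shows no suppression at all
```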

6.
bioRxiv; 2024 Jan 17.
Article in English | MEDLINE | ID: mdl-37745363

ABSTRACT

Cortical regions supporting speech production are commonly established using neuroimaging techniques in both research and clinical settings. However, for neurosurgical purposes, structural function is routinely mapped peri-operatively using direct electrocortical stimulation. While this method is the gold standard for identification of eloquent cortical regions to preserve in neurosurgical patients, there is a lack of specificity about the actual underlying cognitive processes being interrupted. To address this, we propose mapping the temporal dynamics of speech arrest across peri-sylvian cortices by quantifying the latency between stimulation and speech deficits. In doing so, we are able to substantiate hypotheses about distinct region-specific functional roles (e.g., planning versus motor execution). In this retrospective observational study, we analyzed 20 patients (12 female; age range 14-43) with refractory epilepsy who underwent continuous extra-operative intracranial EEG monitoring of an automatic speech task during clinical bedside language mapping. Latency to speech arrest was calculated as time from stimulation onset to speech arrest onset, controlling for individual speech rate. Most instances of motor-based arrest (87.5% of 96 instances) were in sensorimotor cortex, with mid-range latencies to speech arrest and a distributional peak at 0.47 seconds. Speech arrest occurred in numerous regions, with relatively short latencies in supramarginal gyrus (0.46 seconds), superior temporal gyrus (0.51 seconds), and middle temporal gyrus (0.54 seconds), followed by relatively long latencies in sensorimotor cortex (0.72 seconds) and especially long latencies in inferior frontal gyrus (0.95 seconds). Nonparametric testing for speech arrest revealed that region predicted latency; latencies in supramarginal gyrus and in superior temporal gyrus were shorter than in sensorimotor cortex and in inferior frontal gyrus. Sensorimotor cortex is primarily responsible for motor-based arrest. Latencies to speech arrest in supramarginal gyrus and superior temporal gyrus (and to a lesser extent middle temporal gyrus) align with latencies to motor-based arrest in sensorimotor cortex. This pattern of relatively quick cessation of speech suggests that stimulating these regions interferes with outgoing motor execution. In contrast, latencies to speech arrest in inferior frontal gyrus and in ventral regions of sensorimotor cortex were significantly longer than those in temporoparietal regions. Longer latencies in the more frontal areas (including inferior frontal gyrus and ventral areas of precentral gyrus and postcentral gyrus) suggest that stimulating these areas interrupts a higher-level speech production process involved in planning. These results implicate the ventral specialization of sensorimotor cortex (including both precentral and postcentral gyri) for speech planning above and beyond motor execution.

7.
bioRxiv; 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-37745548

ABSTRACT

Neural responses in visual cortex adapt to prolonged and repeated stimuli. While adaptation occurs across the visual cortex, it is unclear how adaptation patterns and computational mechanisms differ across the visual hierarchy. Here, we characterize two signatures of short-term neural adaptation in time-varying intracranial electroencephalography (iEEG) data collected while participants viewed naturalistic image categories varying in duration and repetition interval. Ventral- and lateral-occipitotemporal cortex exhibit slower and prolonged adaptation to single stimuli and slower recovery from adaptation to repeated stimuli compared to V1-V3. For category-selective electrodes, recovery from adaptation is slower for preferred than non-preferred stimuli. To model neural adaptation, we augment our delayed divisive normalization (DN) model by scaling the input strength as a function of stimulus category, enabling the model to accurately predict neural responses across multiple image categories. The model fits suggest that differences in adaptation patterns arise from slower normalization dynamics in higher visual areas interacting with differences in input strength resulting from category selectivity. Our results reveal systematic differences in temporal adaptation of neural population responses across the human visual hierarchy and show that a single computational model of history-dependent normalization dynamics, fit with area-specific parameters, accounts for these differences.

8.
Proc Natl Acad Sci U S A; 120(42): e2300255120, 2023 10 17.
Article in English | MEDLINE | ID: mdl-37819985

ABSTRACT

Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.


Subject(s)
Speech, Temporal Lobe, Humans, Feedback, Acoustic Stimulation
9.
bioRxiv; 2023 Sep 17.
Article in English | MEDLINE | ID: mdl-37745380

ABSTRACT

Decoding human speech from neural signals is essential for brain-computer interface (BCI) technologies that aim to restore speech function in populations with neurological deficits. However, it remains a highly challenging task, compounded by the scarcity of neural signals with corresponding speech, the complexity and high dimensionality of the data, and the limited availability of public source code. Here, we present a novel deep learning-based neural speech decoding framework that includes an ECoG Decoder, which translates electrocorticographic (ECoG) signals from the cortex into interpretable speech parameters, and a novel differentiable Speech Synthesizer, which maps speech parameters to spectrograms. We develop a companion audio-to-audio auto-encoder consisting of a Speech Encoder and the same Speech Synthesizer to generate reference speech parameters that facilitate ECoG Decoder training. This framework generates natural-sounding speech and is highly reproducible across a cohort of 48 participants. Among three neural network architectures for the ECoG Decoder, the 3D ResNet model has the best decoding performance (PCC=0.804) in predicting the original speech spectrogram, closely followed by the SWIN model (PCC=0.796). Our experimental results show that our models can decode speech with high correlation even when limited to only causal operations, which is necessary for adoption by real-time neural prostheses. We successfully decode speech in participants with either left or right hemisphere coverage, which could lead to speech prostheses in patients with speech deficits resulting from left hemisphere damage. Further, we use an occlusion analysis to identify the cortical regions contributing to speech decoding across our models. Finally, we provide open-source code for our two-stage training pipeline, along with associated preprocessing and visualization tools, to enable reproducible research and drive progress across the speech science and prosthesis communities.

10.
bioRxiv; 2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37425747

ABSTRACT

Effective communication hinges on a mutual understanding of word meaning in different contexts. The embedding space learned by large language models can serve as an explicit model of the shared, context-rich meaning space humans use to communicate their thoughts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We demonstrate that the linguistic embedding space can capture the linguistic content of word-by-word neural alignment between speaker and listener. Linguistic content emerged in the speaker's brain before word articulation, and the same linguistic content rapidly reemerged in the listener's brain after word articulation. These findings establish a computational framework to study how human brains transmit their thoughts to one another in real-world contexts.

11.
bioRxiv; 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-36865223

ABSTRACT

Neuronal oscillations at about 10 Hz, called alpha oscillations, are often thought to arise from synchronous activity across occipital cortex, reflecting general cognitive states such as arousal and alertness. However, there is also evidence that modulation of alpha oscillations in visual cortex can be spatially specific. Here, we used intracranial electrodes in human patients to measure alpha oscillations in response to visual stimuli whose location varied systematically across the visual field. We separated the alpha oscillatory power from broadband power changes. The variation in alpha oscillatory power with stimulus position was then fit by a population receptive field (pRF) model. We find that the alpha pRFs have similar center locations to pRFs estimated from broadband power (70-180 Hz), but are several times larger. The results demonstrate that alpha suppression in human visual cortex can be precisely tuned. Finally, we show how the pattern of alpha responses can explain several features of exogenous visual attention. Significance Statement: The alpha oscillation is the largest electrical signal generated by the human brain. An important question in systems neuroscience is the degree to which this oscillation reflects system-wide states and behaviors such as arousal, alertness, and attention, versus much more specific functions in the routing and processing of information. We examined alpha oscillations at high spatial precision in human patients with intracranial electrodes implanted over visual cortex. We discovered a surprisingly high spatial specificity of visually driven alpha oscillations, which we quantified with receptive field models. We further use our discoveries about properties of the alpha response to show a link between these oscillations and the spread of visual attention.
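The pRF model in the record above predicts each electrode's response from the overlap between the stimulus aperture and a 2D Gaussian in the visual field. A minimal sketch (grid extent, bar width, and pRF parameters are illustrative):

```python
import numpy as np

def prf_response(stim_mask, x0, y0, sigma, grid):
    """Predicted response of a Gaussian pRF: overlap of a binary stimulus
    aperture with a unit-mass Gaussian centered at (x0, y0)."""
    X, Y = grid
    g = np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return float((stim_mask * g).sum())

# 101x101 grid spanning [-10, 10] degrees of visual angle.
lin = np.linspace(-10, 10, 101)
grid = np.meshgrid(lin, lin)
bar = np.abs(grid[0] - 2.0) < 1.0                     # vertical bar at x = 2 deg
on_target = prf_response(bar, 2.0, 0.0, 1.5, grid)    # pRF under the bar
off_target = prf_response(bar, -6.0, 0.0, 1.5, grid)  # pRF far from the bar
```

For alpha oscillations the fitted response is suppressive rather than excitatory, but the spatial-overlap logic is the same; the finding above is that alpha pRF centers match broadband pRF centers while their sizes are several times larger.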

12.
J Neurosci; 42(40): 7562-7580, 2022 10 05.
Article in English | MEDLINE | ID: mdl-35999054

ABSTRACT

Neural responses to visual stimuli exhibit complex temporal dynamics, including subadditive temporal summation, response reduction with repeated or sustained stimuli (adaptation), and slower dynamics at low contrast. These phenomena are often studied independently. Here, we demonstrate these phenomena within the same experiment and model the underlying neural computations with a single computational model. We extracted time-varying responses from electrocorticographic recordings from patients presented with stimuli that varied in duration, interstimulus interval (ISI), and contrast. Aggregating data across patients from both sexes yielded 98 electrodes with robust visual responses, covering both earlier (V1-V3) and higher-order (V3a/b, LO, TO, IPS) retinotopic maps. In all regions, the temporal dynamics of neural responses exhibit several nonlinear features. Peak response amplitude saturates with high contrast and longer stimulus durations, the response to a second stimulus is suppressed for short ISIs and recovers for longer ISIs, and response latency decreases with increasing contrast. These features are accurately captured by a computational model composed of a small set of canonical neuronal operations, that is, linear filtering, rectification, exponentiation, and a delayed divisive normalization. We find that an increased normalization term captures both contrast- and adaptation-related response reductions, suggesting potentially shared underlying mechanisms. We additionally demonstrate both changes and invariance in temporal response dynamics between earlier and higher-order visual areas. Together, our results reveal the presence of a wide range of temporal and contrast-dependent neuronal dynamics in the human visual cortex and demonstrate that a simple model captures these dynamics at millisecond resolution. SIGNIFICANCE STATEMENT: Sensory inputs and neural responses change continuously over time. It is especially challenging to understand a system that has both dynamic inputs and outputs. Here, we use a computational modeling approach that specifies computations to convert a time-varying input stimulus to a neural response time course, and we use this to predict neural activity measured in the human visual cortex. We show that this computational model predicts a wide variety of complex neural response shapes, which we induced experimentally by manipulating the duration, repetition, and contrast of visual stimuli. By comparing data and model predictions, we uncover systematic properties of temporal dynamics of neural signals, allowing us to better understand how the brain processes dynamic sensory information.


Subject(s)
Brain, Visual Cortex, Male, Female, Humans, Photic Stimulation/methods, Brain/physiology, Brain Mapping/methods, Time Factors, Visual Cortex/physiology
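The canonical operations listed in the record above (linear filtering, rectification, exponentiation, delayed divisive normalization) can be chained in a few lines. A toy implementation on a step stimulus (time constants, exponent, and semi-saturation constant are illustrative, not the fitted values):

```python
import numpy as np

def dn_response(stim, tau=0.05, tau_norm=0.1, n=2.0, sigma=0.1, dt=0.001):
    """Delayed divisive normalization: filter, rectify, exponentiate, then
    divide by a slower (delayed) low-pass copy of the drive."""
    t = np.arange(0, 1.0, dt)
    irf = np.exp(-t / tau); irf /= irf.sum()             # fast excitatory filter
    irf_d = np.exp(-t / tau_norm); irf_d /= irf_d.sum()  # slower normalization pool
    drive = np.maximum(np.convolve(stim, irf)[: len(stim)], 0.0)
    num = drive ** n
    den = sigma ** n + np.convolve(drive, irf_d)[: len(stim)] ** n
    return num / den

stim = np.zeros(1000)
stim[100:600] = 1.0            # 0.5 s step stimulus at 1 kHz resolution
r = dn_response(stim)          # transient peak, then adaptation to a plateau
```

Because the normalization pool lags the drive, the model produces the transient-then-adapted response shape described above; slowing `tau_norm` mimics the slower dynamics attributed to higher visual areas.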
13.
Nat Neurosci; 25(3): 369-380, 2022 03.
Article in English | MEDLINE | ID: mdl-35260860

ABSTRACT

Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language model (DLM). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.


Subject(s)
Language, Linguistics, Brain/physiology, Humans
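The "post-onset surprise" in the record above is standard surprisal: the negative log probability the model assigned to the word that actually arrived. A minimal sketch with a toy next-word distribution (the vocabulary and probabilities are invented):

```python
import math

def surprisal_bits(p_word):
    """Surprise, in bits, for a word the model predicted with probability p_word."""
    return -math.log2(p_word)

# Toy pre-onset next-word distribution from a hypothetical autoregressive model.
p_next = {"dog": 0.5, "cat": 0.25, "pineapple": 0.01}
well_predicted = surprisal_bits(p_next["dog"])          # low surprise
poorly_predicted = surprisal_bits(p_next["pineapple"])  # high surprise
```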
14.
Nat Hum Behav; 6(3): 455-469, 2022 03.
Article in English | MEDLINE | ID: mdl-35145280

ABSTRACT

To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows (the time window within which stimuli alter the neural response) and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.


Subject(s)
Auditory Cortex, Acoustic Stimulation/methods, Auditory Perception, Brain, Brain Mapping/methods, Humans
15.
PLoS Biol; 20(2): e3001493, 2022 02.
Article in English | MEDLINE | ID: mdl-35113857

ABSTRACT

Hearing one's own voice is critical for fluent speech production, as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce intended speech. We were able to localize the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of the feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that had not previously been implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Sensory Feedback/physiology, Speech Perception/physiology, Adult, Electrocorticography, Epilepsy/surgery, Female, Humans, Male, Motor Cortex/physiology, Speech/physiology
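The DAF manipulation in the record above is simply the speaker's own audio played back with a lag. A minimal sketch of producing the delayed signal (sampling rate and delay are illustrative; real audio would use 16 kHz or more):

```python
import numpy as np

def delayed_feedback(signal, delay_s, rate):
    """Shift a signal later by delay_s seconds, zero-padding the start,
    as a speaker would hear it under delayed auditory feedback."""
    d = int(round(delay_s * rate))
    out = np.zeros_like(signal)
    if d < len(signal):
        out[d:] = signal[: len(signal) - d]
    return out

rate = 1000                                # 1 kHz, for illustration only
t = np.arange(0, 1.0, 1 / rate)
voice = np.sin(2 * np.pi * 5 * t)          # stand-in for a speech envelope
echo = delayed_feedback(voice, 0.2, rate)  # what the speaker hears, 200 ms late
```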
16.
Nat Commun; 12(1): 6288, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34725348

ABSTRACT

Perception results from the interplay of sensory input and prior knowledge. Despite behavioral evidence that long-term priors powerfully shape perception, the neural mechanisms underlying these interactions remain poorly understood. We obtained direct cortical recordings in neurosurgical patients as they viewed ambiguous images that elicit constant perceptual switching. We observe top-down influences from temporal to occipital cortex during the preferred percept that is congruent with the long-term prior. By contrast, stronger feedforward drive is observed during the non-preferred percept, consistent with a prediction error signal. A computational model based on hierarchical predictive coding and attractor networks reproduces all key experimental findings. These results suggest a pattern of large-scale change in information flow underlying the influence of long-term priors on perception and provide constraints on theories of that influence.


Subject(s)
Sensory Feedback, Visual Perception, Adult, Female, Humans, Male, Visual Cortex/diagnostic imaging, Visual Cortex/physiology, Young Adult
17.
Nat Commun; 12(1): 5394, 2021 09 13.
Article in English | MEDLINE | ID: mdl-34518520

ABSTRACT

Humans form lasting memories of stimuli that were only encountered once. This naturally occurs when listening to a story; however, it remains unclear how and when memories are stored and retrieved during story-listening. Here, we first confirm in behavioral experiments that participants can learn about the structure of a story after a single exposure and are able to recall upcoming words when the story is presented again. We then track mnemonic information in high-frequency activity (70-200 Hz) as patients undergoing electrocorticographic recordings listen twice to the same story. We demonstrate predictive recall of upcoming information through neural responses in auditory processing regions. This neural measure correlates with behavioral measures of event segmentation and learning. Event boundaries are linked to information flow from cortex to hippocampus. When listening for a second time, information flow from hippocampus to cortex precedes moments of predictive recall. These results provide insight, on a fine-grained temporal scale, into how episodic memory encoding and retrieval work under naturalistic conditions.


Subject(s)
Cerebral Cortex/physiology, Electrocorticography/methods, Hippocampus/physiology, Learning/physiology, Mental Recall/physiology, Adolescent, Adult, Algorithms, Brain Mapping/methods, Female, Humans, Male, Middle Aged, Models, Neurological, Young Adult
18.
Neuron; 109(13): 2047-2074, 2021 07 07.
Article in English | MEDLINE | ID: mdl-34237278

ABSTRACT

Despite increased awareness of the lack of gender equity in academia and a growing number of initiatives to address issues of diversity, change is slow, and inequalities remain. A major source of inequity is gender bias, which has a substantial negative impact on the careers, work-life balance, and mental health of underrepresented groups in science. Here, we argue that gender bias is not a single problem but manifests as a collection of distinct issues that impact researchers' lives. We disentangle these facets and propose concrete solutions that can be adopted by individuals, academic institutions, and society.


Subject(s)
Gender Equity , Research Personnel , Sexism , Universities/organization & administration , Female , Humans , Male , Research/organization & administration
19.
Front Hum Neurosci ; 15: 661976, 2021.
Article in English | MEDLINE | ID: mdl-33935673

ABSTRACT

Functional human brain mapping is commonly performed during invasive monitoring with intracranial electroencephalographic (iEEG) electrodes prior to resective surgery for drug-resistant epilepsy. The current gold standard, electrocortical stimulation mapping (ESM), is time-consuming, sometimes elicits pain, and often induces afterdischarges or seizures. Moreover, there is a risk of overestimating eloquent areas due to propagation of the effects of stimulation to a broader network of language cortex. Passive iEEG spatial-temporal functional mapping (STFM) has recently emerged as a potential alternative to ESM. However, investigators have observed less correspondence between STFM and ESM maps of language than between their maps of motor function. We hypothesized that incongruities between ESM and STFM of language function may arise from propagation of the effects of ESM to cortical areas having strong effective connectivity with the site of stimulation. We evaluated five patients who underwent invasive monitoring for seizure localization and whose language areas were identified using ESM. All patients performed a battery of language tasks during passive iEEG recordings. To estimate the effective connectivity of stimulation sites with a broader network of task-activated cortical sites, we measured cortico-cortical evoked potentials (CCEPs) elicited across all recording sites by single-pulse electrical stimulation at sites where ESM was performed at other times. Combining high-gamma power with the CCEP results, we trained a logistic regression model to predict ESM results at individual electrode pairs. The average accuracy of the classifier using STFM and CCEP results combined was 87.7%, significantly higher than that of the classifier using STFM alone (71.8%), indicating that the correspondence between STFM and ESM results is greater when effective connectivity between ESM stimulation sites and task-activated sites is taken into consideration.
These findings, though based on a small number of subjects to date, provide preliminary support for the hypothesis that incongruities between ESM and STFM may arise in part from propagation of stimulation effects to a broader network of cortical language sites activated by language tasks, and suggest that more studies, with larger numbers of patients, are needed to understand the utility of both mapping techniques in clinical practice.
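The classifier described above can be sketched as a logistic regression over one high-gamma (STFM) feature and one CCEP feature per electrode pair. Everything below — the synthetic data, effect sizes, and 5-fold cross-validation — is an illustrative assumption, not the study's pipeline; it only shows the shape of the "combined features beat STFM alone" comparison:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: per electrode pair, one high-gamma (STFM) feature
# and one CCEP connectivity feature; label = whether ESM flagged the pair.
rng = np.random.default_rng(42)
n = 200
esm_positive = rng.integers(0, 2, n)
high_gamma = 1.0 * esm_positive + rng.normal(0.0, 1.0, n)  # weakly informative
ccep_amp = 1.5 * esm_positive + rng.normal(0.0, 1.0, n)    # more informative

X_stfm = high_gamma.reshape(-1, 1)                # STFM feature alone
X_both = np.column_stack([high_gamma, ccep_amp])  # STFM + CCEP features

acc_stfm = cross_val_score(LogisticRegression(), X_stfm, esm_positive, cv=5).mean()
acc_both = cross_val_score(LogisticRegression(), X_both, esm_positive, cv=5).mean()
print(f"STFM alone: {acc_stfm:.2f}  STFM+CCEP: {acc_both:.2f}")
```

In the synthetic setup, adding the informative connectivity feature raises cross-validated accuracy, mirroring the reported 71.8% vs. 87.7% pattern in direction only.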

20.
Brain ; 144(5): 1590-1602, 2021 06 22.
Article in English | MEDLINE | ID: mdl-33889945

ABSTRACT

We describe the spatiotemporal course of cortical high-gamma activity, hippocampal ripple activity, and interictal epileptiform discharges during an associative memory task in 15 epilepsy patients undergoing invasive EEG. Successful encoding trials showed significantly greater high-gamma activity in the hippocampus and frontal regions. Successful cued-recall trials showed sustained hippocampal high-gamma activity compared with failed responses. Hippocampal ripple rates were greater during successful encoding and retrieval trials. Hippocampal interictal epileptiform discharges during encoding were associated with 15% decreased odds of remembering (95% confidence interval 6-23%), and during retrieval they predicted 25% decreased odds of remembering (15-33%). Odds of remembering were reduced by 25-52% if interictal epileptiform discharges occurred during the 500-2000 ms window of encoding, or by 41% during retrieval. During both encoding and retrieval, hippocampal interictal epileptiform discharges were followed by a transient decrease in ripple rate. We hypothesize that interictal epileptiform discharges impair associative memory in a regionally and temporally specific manner by decreasing the physiological hippocampal ripples necessary for effective encoding and recall. Because this dynamic memory impairment arises from pathological interictal epileptiform discharges competing with physiological ripples, such discharges represent a promising therapeutic target for memory remediation in patients with epilepsy.
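The effects above are reported as "% decreased odds", the usual way of presenting an odds ratio below 1 from a logistic-type model. A small worked conversion, purely illustrative (the modeling details behind the abstract's estimates are not specified here):

```python
import math

def percent_decreased_odds(odds_ratio):
    """Convert an odds ratio below 1 into a '% decreased odds' figure."""
    return (1.0 - odds_ratio) * 100.0

# The hippocampal-encoding effect: OR = 0.85 -> 15% decreased odds; the
# reported 95% CI of 6-23% corresponds to ORs of roughly 0.94 and 0.77.
print(round(percent_decreased_odds(0.85), 1))  # 15.0

# In a logistic model the OR is exp(beta) for the discharge-indicator
# coefficient, so beta = ln(0.85) implies the same 15% decrease.
beta = math.log(0.85)
print(round(percent_decreased_odds(math.exp(beta)), 1))  # 15.0
```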


Subject(s)
Epilepsy/physiopathology , Hippocampus/physiopathology , Mental Recall/physiology , Adolescent , Adult , Electrocorticography , Epilepsy/complications , Female , Humans , Male , Memory Disorders/etiology , Memory Disorders/physiopathology , Middle Aged , Young Adult